Stochastic Zeroth-Order Riemannian Derivative Estimation and Optimization

Authors

Abstract

We consider stochastic zeroth-order optimization over Riemannian submanifolds embedded in Euclidean space, where the task is to solve Riemannian optimization problems with only noisy objective function evaluations. Toward this, our main contribution is to propose estimators of the Riemannian gradient and Hessian from noisy objective function evaluations, based on a Riemannian version of the Gaussian smoothing technique. The proposed estimators overcome the difficulty of nonlinearity of the manifold constraint and issues that arise when using Euclidean Gaussian smoothing techniques for a function defined only over the manifold. We use the proposed estimators to solve Riemannian optimization problems in the following settings for the objective function: (i) stochastic and gradient-Lipschitz (in both nonconvex and geodesic convex settings), (ii) a sum of gradient-Lipschitz and nonsmooth functions, and (iii) Hessian-Lipschitz. For these settings, we analyze the oracle complexity of our algorithms to obtain appropriately defined notions of an ε-stationary point or an ε-approximate local minimizer. Notably, our complexities are independent of the dimension of the ambient Euclidean space and depend only on the intrinsic dimension of the manifold under consideration. We demonstrate the applicability of our results by simulation results and real-world applications, including black-box stiffness control for robotics and black-box attacks on neural networks. Funding: J. Li and S. Ma acknowledge support from the National Science Foundation (NSF) [Grants DMS-1953210, CCF-1934568, CCF-2007797]. K. Balasubramanian acknowledges support from the NSF [Grant DMS-2053918] and the UC Davis CeDAR (Center for Data Science and Artificial Intelligence Research) Innovative Seed Funding Program.
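To illustrate the idea behind zeroth-order Riemannian gradient estimation via Gaussian smoothing, here is a minimal sketch for the unit sphere as the submanifold. It is an assumption-laden illustration, not the paper's exact construction: Gaussian directions are projected onto the tangent space at the current point, the function is queried along a retraction, and finite differences are averaged. The helper names (`proj_tangent`, `retract`, `zo_riemannian_grad`) and all parameter choices are hypothetical.

```python
import numpy as np


def proj_tangent(x, v):
    # Orthogonal projection of v onto the tangent space of the
    # unit sphere at x (assumes ||x|| = 1).
    return v - np.dot(x, v) * x


def retract(x, v):
    # Metric-projection retraction: step in the tangent direction,
    # then renormalize back onto the sphere.
    y = x + v
    return y / np.linalg.norm(y)


def zo_riemannian_grad(f, x, mu=1e-4, num_samples=2000, rng=None):
    """Sketch of a zeroth-order Riemannian gradient estimator.

    Averages finite differences (f(Retr_x(mu*u)) - f(x)) / mu * u
    over Gaussian directions u projected onto the tangent space,
    so the estimate itself lies in the tangent space at x.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    d = x.size
    fx = f(x)
    g = np.zeros(d)
    for _ in range(num_samples):
        u = proj_tangent(x, rng.standard_normal(d))  # tangent Gaussian direction
        g += (f(retract(x, mu * u)) - fx) / mu * u
    return g / num_samples
```

For a smooth objective, the average converges (up to an O(mu) smoothing bias) to the Riemannian gradient, e.g. for f(x) = xᵀAx on the sphere the target is the tangent projection of the Euclidean gradient 2Ax. Note that only function evaluations of f are used, matching the noisy-oracle setting of the paper.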


Related resources

Stochastic Zeroth-order Optimization in High Dimensions

We consider the problem of optimizing a high-dimensional convex function using stochastic zeroth-order queries. Under sparsity assumptions on the gradients or function values, we present two algorithms: a successive component/feature selection algorithm and a noisy mirror descent algorithm using Lasso gradient estimates, and show that both algorithms have convergence rates that depend only loga...


On Zeroth-Order Stochastic Convex Optimization via Random Walks

We propose a method for zeroth-order stochastic convex optimization that attains the suboptimality rate of Õ(n^7 T^(−1/2)) after T queries for a convex bounded function f : R^n → R. The method is based on a random walk (the Ball Walk) on the epigraph of the function. The randomized approach circumvents the problem of gradient estimation, and appears to be less sensitive to noisy function evaluations c...


Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming

In this paper, we introduce a new stochastic approximation (SA) type algorithm, namely the randomized stochastic gradient (RSG) method, for solving an important class of nonlinear (possibly nonconvex) stochastic programming (SP) problems. We establish the complexity of this method for computing an approximate stationary point of a nonlinear programming problem. We also show that this method pos...


A Comprehensive Linear Speedup Analysis for Asynchronous Stochastic Parallel Optimization from Zeroth-Order to First-Order

Asynchronous parallel optimization has received substantial success and extensive attention recently. One of the core theoretical questions is how much speedup (or benefit) asynchronous parallelization can bring. This paper provides a comprehensive and generic analysis of the speedup property for a broad range of asynchronous parallel stochastic algorithms, from the zeroth order to the...


Zeroth-order Asynchronous Doubly Stochastic Algorithm with Variance Reduction

Zeroth-order (derivative-free) optimization attracts a lot of attention in machine learning because explicit gradient calculations may be computationally expensive or infeasible. To handle large-scale problems in both volume and dimension, asynchronous doubly stochastic zeroth-order algorithms were recently proposed. The convergence rate of existing asynchronous doubly stochastic zeroth-order ...



Journal

Journal title: Mathematics of Operations Research

Year: 2023

ISSN: 0364-765X, 1526-5471

DOI: https://doi.org/10.1287/moor.2022.1302